
chore: upgrade Flink to 2.2.0 and CP to 8.2.0 in Docker deployment #136

Open
Dave Troiano (davetroiano) wants to merge 1 commit into master from flink-2.2.0-upgrade

Conversation

@davetroiano
Contributor

No description provided.

@davetroiano Dave Troiano (davetroiano) requested a review from a team as a code owner April 17, 2026 19:44
Copilot AI review requested due to automatic review settings April 17, 2026 19:44
@confluent-cla-assistant

🎉 All Contributor License Agreements have been signed. Ready to merge.
Please push an empty commit if you would like to re-run the checks to verify CLA status for all contributors.


Copilot AI left a comment


Pull request overview

Upgrades the Docker-based Flink SQL tutorial deployment to Flink 2.2.0 and Confluent Platform 8.2.0, aligning container images and Flink configuration for the newer runtime.

Changes:

  • Bump Flink base images / build version to 2.2.0-scala_2.12-java17.
  • Replace legacy flink-conf.yaml with a new hierarchical config.yaml under docker/flink/conf/.
  • Update docker-compose-flinksql.yml to CP 8.2.0 images and adjust exposed services/ports.
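For readers unfamiliar with the config migration in the second bullet: Flink 2.x replaces the flat dotted keys of `flink-conf.yaml` with nested YAML in `config.yaml`. The sketch below illustrates the shape of that change using standard Flink option names; it is not necessarily the exact file added in this PR.

```yaml
# Flink 2.x config.yaml (hierarchical) — illustrative sketch, not the PR's exact file
jobmanager:
  rpc:
    address: flink-jobmanager
taskmanager:
  numberOfTaskSlots: 10
parallelism:
  default: 1
rest:
  address: flink-jobmanager

# The legacy flink-conf.yaml expressed the same options as flat dotted keys:
# jobmanager.rpc.address: flink-jobmanager
# taskmanager.numberOfTaskSlots: 10
```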

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| `docker/flink/conf/flink-conf.yaml` | Removes the legacy Flink configuration file from the Docker deployment. |
| `docker/flink/conf/config.yaml` | Adds the Flink 2.x-style hierarchical configuration used by the Docker images. |
| `docker/flink/Makefile` | Updates the Flink build/push tag to 2.2.0-scala_2.12-java17. |
| `docker/flink/Dockerfile.flink-sql-client-kafka` | Updates the SQL client image to Flink 2.2.0 and refreshes downloaded connector JARs. |
| `docker/flink/Dockerfile.flink-kafka` | Updates the Flink runtime image to Flink 2.2.0 and refreshes downloaded connector JARs. |
| `docker/docker-compose-flinksql.yml` | Upgrades Kafka/Schema Registry images to 8.2.0 and updates the Flink service images to 2.2.0. |


```yaml
parallelism:
  default: 1
rest:
  address: flink-jobmanager
```

Copilot AI Apr 17, 2026


The REST endpoint is exposed on the host via 8081:8081 in docker-compose, but the Flink config here does not set a REST bind address. If Flink defaults to binding REST to localhost inside the container, the Web UI/REST API will not be reachable from outside the container. Consider explicitly setting the REST bind address (and bind port if needed) in this config (e.g., bind to 0.0.0.0).

Suggested change:

```diff
 address: flink-jobmanager
+bind-address: 0.0.0.0
```

Comment on lines +6 to +7
```dockerfile
wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar;
```

Copilot AI Apr 17, 2026


These images download connector JARs from Maven Central during the Docker build without any integrity verification (checksum/signature). This is a supply-chain risk and can also lead to non-reproducible builds if artifacts are ever replaced. Consider pinning and verifying checksums (or vendoring the artifacts / using a build that resolves deps via Gradle/Maven and copies them into the image).

Suggested change:

```diff
-wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
-wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar;
+wget -P /opt/flink/lib https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar.sha512; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar.sha512; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar.sha512; \
+cd /opt/flink/lib && \
+sha512sum -c flink-sql-connector-kafka-4.0.1-2.0.jar.sha512 && \
+sha512sum -c flink-json-2.2.0.jar.sha512 && \
+sha512sum -c flink-sql-avro-confluent-registry-2.2.0.jar.sha512 && \
+rm -f \
+flink-sql-connector-kafka-4.0.1-2.0.jar.sha512 \
+flink-json-2.2.0.jar.sha512 \
+flink-sql-avro-confluent-registry-2.2.0.jar.sha512
```
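One caveat with this suggestion, worth checking before adopting it: the `.sha512` files on Maven Central contain only the bare hex digest, not the `<digest>  <filename>` line that `sha512sum -c` expects, so the check as written may fail until the digest file is rewritten. The self-contained sketch below demonstrates the verification flow with a local stand-in file (all names are illustrative, not real downloads):

```shell
# Sketch: verifying a download against a bare-digest .sha512 file.
# A locally created file stands in for the Maven artifact (names illustrative).
set -eu
workdir=$(mktemp -d)
cd "$workdir"
printf 'jar-bytes' > flink-json-2.2.0.jar      # stand-in for the downloaded JAR

# Maven Central .sha512 files hold only the hex digest, no filename,
# so build a "<digest>  <file>" line that sha512sum -c understands.
sha512sum flink-json-2.2.0.jar | awk '{print $1}' > bare.sha512
printf '%s  flink-json-2.2.0.jar\n' "$(cat bare.sha512)" > flink-json-2.2.0.jar.sha512

sha512sum -c flink-json-2.2.0.jar.sha512
```

In a Dockerfile, the digest-rewriting step (or a hardcoded pinned digest baked into the image) would sit between the `wget` of the `.sha512` file and the `sha512sum -c` call.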

Comment on lines +5 to +7
```dockerfile
RUN wget -P /opt/flink/lib https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
    wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar;
```

Copilot AI Apr 17, 2026


These images download connector JARs from Maven Central during the Docker build without any integrity verification (checksum/signature). This is a supply-chain risk and can also lead to non-reproducible builds if artifacts are ever replaced. Consider pinning and verifying checksums (or vendoring the artifacts / using a build that resolves deps via Gradle/Maven and copies them into the image).

Suggested change:

```diff
-RUN wget -P /opt/flink/lib https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar; \
-wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
-wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar;
+RUN set -eux; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-kafka/4.0.1-2.0/flink-sql-connector-kafka-4.0.1-2.0.jar.sha512; \
+(cd /opt/flink/lib && sha512sum -c flink-sql-connector-kafka-4.0.1-2.0.jar.sha512); \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-json/2.2.0/flink-json-2.2.0.jar.sha512; \
+(cd /opt/flink/lib && sha512sum -c flink-json-2.2.0.jar.sha512); \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar; \
+wget -P /opt/flink/lib/ https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/2.2.0/flink-sql-avro-confluent-registry-2.2.0.jar.sha512; \
+(cd /opt/flink/lib && sha512sum -c flink-sql-avro-confluent-registry-2.2.0.jar.sha512); \
+rm -f /opt/flink/lib/*.sha512
```

Comment on lines 55 to 58
```yaml
- |
  FLINK_PROPERTIES=
  jobmanager.rpc.address: flink-jobmanager
  taskmanager.numberOfTaskSlots: 10
```

Copilot AI Apr 17, 2026


FLINK_PROPERTIES here duplicates settings that appear to be baked into the image via /opt/flink/conf/config.yaml (e.g., jobmanager.rpc.address and taskmanager.numberOfTaskSlots). Keeping the same config in both places increases the chance they drift; consider relying on a single source of truth (either the config file or container-specific FLINK_PROPERTIES) or adding a comment explaining why overrides are needed.
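One way to act on this comment, sketched against hypothetical service and image names (the real compose file may differ), is to drop the env-var overrides entirely and let the config file baked into the image be the single source of truth:

```yaml
# Sketch: rely on the image's /opt/flink/conf/config.yaml alone.
# Service and image names here are illustrative.
flink-taskmanager:
  image: flink:2.2.0-scala_2.12-java17
  command: taskmanager
  # No FLINK_PROPERTIES: jobmanager.rpc.address and
  # taskmanager.numberOfTaskSlots come from the baked-in config.yaml.
```

The inverse choice (keep FLINK_PROPERTIES, strip those keys from config.yaml) works too; what matters is that each option has exactly one home.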

Comment on lines +70 to +73
```yaml
- |
  FLINK_PROPERTIES=
  jobmanager.rpc.address: flink-jobmanager
  taskmanager.numberOfTaskSlots: 10
```

Copilot AI Apr 17, 2026


FLINK_PROPERTIES here duplicates settings that appear to be baked into the image via /opt/flink/conf/config.yaml (e.g., jobmanager.rpc.address and taskmanager.numberOfTaskSlots). Keeping the same config in both places increases the chance they drift; consider relying on a single source of truth (either the config file or container-specific FLINK_PROPERTIES) or adding a comment explaining why overrides are needed.


2 participants